Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits - Appendix

Neural Information Processing Systems

A.1 Detailed Re-alignment Task Formulation and Training Setup

In Figure A1, we show the procedure for converting the data samples in the alignment datasets into training data for AEM (negative samples used in AIL are generated similarly). In DP-inferred chains of edits (CoEs), we use a few special tokens to mark the editing operations (with their positions and content); our decipher module then translates these special tokens into natural language. As the final step, we add a special token [SEP] between Context + Source and the ground-truth Chain-of-Edits (CoEs) + Target, as a boundary signal similar to the setting in text-to-text training, so that the LM can learn the boundary between Context + Source and CoEs + Target. We also augment the data by using different sets of costs for the editing operations (as discussed in Section 3.2 and footnote 3). For AEM, we fine-tune the LM on the resulting Source-CoE-Target data (shown as "Input for AEM" in Figure A1) with the standard language-modeling objective, i.e., maximizing the probability of the ground-truth token at each decoding step. We train for three epochs per task by default, with an early-stopping condition that halts training when the evaluation loss does not decrease (i.e., plateaus) for five intermediate evaluation steps.
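The data construction and early-stopping rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the special-token names ([SEP], [DEL], [INS]), the edit-operation encoding, and the function signatures are all assumptions for the sake of the example.

```python
# Sketch: assembling one AEM training example and checking early stopping.
# Token names and the edit-operation encoding are illustrative assumptions.

def build_aem_example(context, source, chain_of_edits, target, sep="[SEP]"):
    """Concatenate Context + Source, a [SEP] boundary token, then the
    ground-truth chain of edits and the target, as one training sequence."""
    prefix = f"{context} {source}"
    coe = " ".join(chain_of_edits)  # each op marks position and content
    return f"{prefix} {sep} {coe} {target}"

def should_stop(eval_losses, patience=5):
    """Early stopping: stop when the most recent `patience` evaluations
    show no improvement over the best loss seen before them."""
    if len(eval_losses) <= patience:
        return False
    best_before = min(eval_losses[:-patience])
    return min(eval_losses[-patience:]) >= best_before

example = build_aem_example(
    context="Q: How was your day?",
    source="It was bad.",
    chain_of_edits=["[DEL] 2 bad", "[INS] 2 great"],  # hypothetical encoding
    target="It was great.",
)
```

The whole sequence is then fed to the LM with the usual next-token cross-entropy loss; the [SEP] token is what lets the model separate the conditioning prefix from the CoE + Target continuation it must generate.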